This work presents gauge-equivariant architectures for flow-based sampling in fermionic lattice field theories that use pseudofermions as stochastic estimators of the fermion determinant. This is the default approach in state-of-the-art lattice field theory calculations, making it critical to the practical application of flow models to theories such as QCD. Methods for improving flow-based sampling via standard techniques, such as even/odd preconditioning and the Hasenbusch factorization, are also outlined. Numerical demonstrations are provided for two-dimensional U(1) and SU(3) gauge theories with $N_f = 2$ fermions.
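As a rough illustration of the pseudofermion idea mentioned in this abstract (not the paper's gauge-equivariant architecture), the fermion determinant $\det(D^\dagger D)$ can be traded for a Gaussian integral over an auxiliary pseudofermion field, which is then sampled stochastically. In the sketch below, a small random dense matrix stands in for the lattice Dirac operator, and all function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudofermion_action(D, phi):
    """S_pf = phi^dag (D^dag D)^{-1} phi; integrating exp(-S_pf) over phi
    reproduces det(D^dag D) up to a constant."""
    x = np.linalg.solve(D.conj().T @ D, phi)
    return np.real(phi.conj() @ x)

def sample_pseudofermions(D):
    """Draw phi = D^dag chi with chi ~ complex N(0, 1), so that phi is
    distributed proportionally to exp(-S_pf)."""
    n = D.shape[0]
    chi = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    return D.conj().T @ chi

# Toy example: a random complex matrix plays the role of the Dirac operator.
n = 8
D = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
phi = sample_pseudofermions(D)
print("pseudofermion action:", pseudofermion_action(D, phi))
```

Techniques such as even/odd preconditioning and the Hasenbusch factorization then split or precondition $D$ so that the linear solves above become cheaper and the stochastic estimator less noisy.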
The original "Seven Motifs" set forth a roadmap of essential methods for the field of scientific computing, where a motif is an algorithmic method that captures a pattern of computation and data movement. We present the "Nine Motifs of Simulation Intelligence", a roadmap for the development and integration of the essential algorithms needed to merge scientific computing, scientific simulation, and artificial intelligence; we call this merger simulation intelligence (SI) for short. We argue that the motifs of simulation intelligence are interconnected and interdependent, much like the components within the layers of an operating system. Using this metaphor, we explore the nature of each layer of the simulation intelligence operating system stack (SI-stack) and the motifs therein: (1) multi-physics and multi-scale modeling; (2) surrogate modeling and emulation; (3) simulation-based inference; (4) causal modeling and inference; (5) agent-based modeling; (6) probabilistic programming; (7) differentiable programming; (8) open-ended optimization; (9) machine programming. We believe that coordinated efforts among the motifs offer immense opportunities to accelerate scientific discovery, from solving inverse problems in synthetic biology and climate science, to directing nuclear energy experiments and predicting emergent behavior in socioeconomic settings. We elaborate on each layer of the SI-stack, detailing state-of-the-art methods, presenting examples to highlight challenges and opportunities, and advocating for specific ways to advance the motifs and the synergies from their combinations. Advancing and integrating these technologies can enable a robust and efficient hypothesis-simulation-analysis type of scientific method, which we introduce with several use cases for human-machine teaming and automated science.
Algorithms based on normalizing flows are promising machine learning approaches for sampling complex probability distributions in a way that can be made asymptotically exact. In the context of lattice field theory, proof-of-principle studies have demonstrated the effectiveness of this approach for scalar theories, gauge theories, and statistical systems. This work develops approaches that enable flow-based sampling of theories with dynamical fermions, which is necessary for applications to lattice field theory studies of the Standard Model of particle physics and many condensed matter systems. As a practical demonstration, these methods are applied to sampling field configurations for a two-dimensional theory of massless staggered fermions coupled to a scalar field via a Yukawa interaction.
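To make the "asymptotically exact" claim concrete, the sketch below (a generic illustration, not the paper's model) corrects proposals from a deliberately crude flow with an independence Metropolis step that uses the flow's exact density; the toy double-well action, the affine "flow", and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def target_logp(x):
    """Unnormalized log density exp(-S(x)) of a toy double-well action."""
    return -(x**2 - 1.0)**2

def flow_sample(n, a=0.8, b=0.0):
    """Trivial affine 'flow' x = a*z + b with z ~ N(0,1), returned together
    with its exact log density under the flow."""
    z = rng.standard_normal(n)
    x = a * z + b
    logq = -0.5 * z**2 - 0.5 * np.log(2 * np.pi) - np.log(abs(a))
    return x, logq

# Independence Metropolis: propose from the flow, accept with the exact
# density ratio; the chain is asymptotically exact even when the flow only
# roughly matches the target, at the cost of a lower acceptance rate.
x, logq = flow_sample(10_000)
logw = target_logp(x) - logq          # log importance weights
chain, cur, accepted = [x[0]], logw[0], 0
for xi, wi in zip(x[1:], logw[1:]):
    if np.log(rng.random()) < wi - cur:
        chain.append(xi)
        cur = wi
        accepted += 1
    else:
        chain.append(chain[-1])
print("acceptance rate:", accepted / (len(x) - 1))
```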
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely supervise the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
Language models have become increasingly popular in recent years for tasks like information retrieval. As use cases become oriented toward specific domains, fine-tuning has become the default approach for achieving strong performance. To fine-tune these models for specific tasks and datasets, it is necessary to carefully tune the model's hyperparameters and training techniques. In this paper, we present an in-depth analysis of the performance of four transformer-based language models on the task of biomedical information retrieval. The models we consider are DeepMind's RETRO (7B parameters), GPT-J (6B parameters), GPT-3 (175B parameters), and BLOOM (176B parameters). We compare their performance on the basis of relevance, accuracy, and interpretability, using a large corpus of 480,000 research papers on protein structure/function prediction as our dataset. Our findings suggest that smaller models, with <10B parameters and fine-tuned on domain-specific datasets, tend to outperform larger language models on highly specific questions in terms of accuracy, relevance, and interpretability by a significant margin (+50% on average). However, larger models do provide generally better results on broader prompts.
Recent methods demonstrate that data augmentation using counterfactual knowledge can teach models the causal structure of a task, leading to robust and generalizable models. However, such counterfactual data often has a limited scale and diversity if crowdsourced and is computationally expensive to extend to new perturbation types if generated using supervised methods. To address this, we introduce a new framework called DISCO for automatically generating high-quality counterfactual data at scale. DISCO engineers prompts to generate phrasal perturbations with a large general language model. Then, a task-specific teacher model filters the generation to distill high-quality counterfactual data. We show that learning with this counterfactual data yields a comparatively small student model that is 6% (absolute) more robust and generalizes 5% better across distributions than baselines on various challenging evaluations. This model is also 15% more sensitive in differentiating original and counterfactual examples, on three evaluation sets written by human workers and via human-AI collaboration.
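A minimal sketch of the teacher-filtering step described above, assuming candidates arrive as (perturbed text, intended flipped label) pairs and that a hypothetical `teacher_probs` callable stands in for the task-specific teacher model; the actual DISCO prompting and filtering details are those of the paper, not this sketch.

```python
from typing import Callable, Dict, List, Tuple

def filter_counterfactuals(
    candidates: List[Tuple[str, str]],                # (perturbed text, intended label)
    teacher_probs: Callable[[str], Dict[str, float]],
    threshold: float = 0.9,
) -> List[Tuple[str, str]]:
    """Keep only candidates that the teacher labels with the intended
    (flipped) label at high confidence."""
    kept = []
    for text, intended_label in candidates:
        if teacher_probs(text).get(intended_label, 0.0) >= threshold:
            kept.append((text, intended_label))
    return kept

# Hypothetical usage with a stubbed teacher.
def toy_teacher(text: str) -> Dict[str, float]:
    flipped = "not" in text
    return {"entailment": 0.05 if flipped else 0.95,
            "contradiction": 0.95 if flipped else 0.05}

cands = [("A dog is running.", "entailment"),
         ("A dog is not running.", "contradiction")]
print(filter_counterfactuals(cands, toy_teacher))
```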
Multi-document summarization (MDS) has traditionally been studied assuming a set of ground-truth topic-related input documents is provided. In practice, the input document set is unlikely to be available a priori and would need to be retrieved based on an information need, a setting we call open-domain MDS. We experiment with current state-of-the-art retrieval and summarization models on several popular MDS datasets extended to the open-domain setting. We find that existing summarizers suffer large reductions in performance when applied as-is to this more realistic task, though training summarizers with retrieved inputs can reduce their sensitivity to retrieval errors. To further probe these findings, we conduct perturbation experiments on summarizer inputs to study the impact of different types of document retrieval errors. Based on our results, we provide practical guidelines to help facilitate a shift to open-domain MDS. We release our code and experimental results alongside all data or model artifacts created during our investigation.
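A minimal sketch of the open-domain MDS setting as described, assuming a toy bag-of-words retriever and a stubbed `summarize` callable in place of real retrieval and summarization models.

```python
from collections import Counter
from math import sqrt
from typing import Callable, List

def retrieve(query: str, corpus: List[str], k: int = 3) -> List[str]:
    """Toy bag-of-words cosine retriever standing in for a real sparse or
    dense retriever."""
    def vec(text):
        return Counter(text.lower().split())
    def cos(a, b):
        num = sum(a[t] * b[t] for t in a)
        den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return num / den if den else 0.0
    q = vec(query)
    return sorted(corpus, key=lambda d: cos(q, vec(d)), reverse=True)[:k]

def open_domain_mds(query: str, corpus: List[str],
                    summarize: Callable[[List[str]], str], k: int = 3) -> str:
    """Open-domain MDS: retrieve an input set for the information need,
    then hand it to a multi-document summarizer."""
    return summarize(retrieve(query, corpus, k))

# Hypothetical usage with a stubbed summarizer.
corpus = ["storm damages coastal town", "new battery chemistry announced",
          "coastal town begins storm cleanup"]
print(open_domain_mds("storm coastal town", corpus,
                      summarize=lambda docs: " / ".join(docs), k=2))
```

Retrieval errors in this pipeline (missing or off-topic documents returned by `retrieve`) are exactly the perturbations whose effect on the summarizer the paper studies.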
Language tasks involving character-level manipulations (e.g., spelling correction, many word games) are challenging for models based on subword tokenization. To address this, we adapt the interchange intervention training method of Geiger et al. (2021) to operate on type-level variables over characters. This allows us to encode robust, position-independent character-level information in the internal representations of subword-based models. We additionally introduce a suite of character-level tasks that systematically vary in their dependence on meaning and sequence-level context. While simple character-level tokenization approaches still perform best on purely form-based tasks like string reversal, our method is superior for more complex tasks that blend form, meaning, and context, such as spelling correction in context and word search games. Our approach also leads to subword-based models with human-interpretable internal representations of characters.
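A minimal sketch of what an interchange intervention looks like at inference time, assuming a toy two-layer network and an arbitrary slice of hidden units as the variable aligned with character identity; the training objective of Geiger et al. (2021), which pushes the intervened output toward the counterfactual target, is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-layer network; the first four hidden units are treated as the
# representation aligned with the character-level causal variable.
W1 = rng.standard_normal((8, 16))
W2 = rng.standard_normal((16, 4))
ALIGNED = slice(0, 4)

def forward(x, swap_from=None):
    """Forward pass; if swap_from is given, its aligned hidden units are
    spliced into the base run (an interchange intervention) before the
    second layer."""
    h = np.tanh(x @ W1)
    if swap_from is not None:
        h_src = np.tanh(swap_from @ W1)
        h = h.copy()
        h[ALIGNED] = h_src[ALIGNED]
    return h @ W2

base, source = rng.standard_normal(8), rng.standard_normal(8)
print(np.round(forward(base), 2))                    # ordinary output
print(np.round(forward(base, swap_from=source), 2))  # intervened output
```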
In data-driven systems, data exploration is imperative for making real-time decisions. However, big data is stored in massive databases from which retrieval is difficult. Approximate Query Processing (AQP) is a technique for providing approximate answers to aggregate queries based on a summary of the data (synopsis) that closely replicates the behavior of the actual data, which can be useful where an approximate answer to the queries would be acceptable in a fraction of the real execution time. In this paper, we discuss the use of Generative Adversarial Networks (GANs) for generating tabular data that can be employed in AQP for synopsis construction. We first discuss the challenges associated with constructing synopses in relational databases and then introduce solutions to those challenges. Following that, we organize statistical metrics to evaluate the quality of the generated synopses. We conclude that tabular data complexity makes it difficult for algorithms to understand relational database semantics during training, and improved versions of tabular GANs are capable of constructing synopses to revolutionize data-driven decision-making systems.
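A minimal sketch of how a synopsis is used to answer an aggregate query in AQP, assuming a uniform random sample stands in for a GAN-generated synopsis and a hard-coded COUNT/AVG query with a filter.

```python
import numpy as np

rng = np.random.default_rng(2)

def approximate_query(synopsis: np.ndarray, full_size: int):
    """Approximate COUNT and AVG over rows with value > 50 using only the
    synopsis; COUNT is scaled by the synopsis-to-table size ratio."""
    mask = synopsis[:, 1] > 50
    scale = full_size / len(synopsis)
    approx_count = mask.sum() * scale
    approx_avg = synopsis[mask, 1].mean() if mask.any() else float("nan")
    return approx_count, approx_avg

# "Real" table of (id, value) rows; a random subsample stands in for the
# GAN-generated synopsis discussed in the paper.
table = np.column_stack([np.arange(100_000),
                         rng.normal(50, 20, size=100_000)])
synopsis = table[rng.choice(len(table), size=1_000, replace=False)]

print("approximate:", approximate_query(synopsis, len(table)))
exact_mask = table[:, 1] > 50
print("exact:      ", (exact_mask.sum(), table[exact_mask, 1].mean()))
```

The quality metrics mentioned above then compare the approximate answers (and the synopsis distribution) against the exact ones.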
Trajectory-User Linking (TUL) is a relatively new mobility classification task in which anonymous trajectories are linked to the users who generated them. With applications ranging from personalized recommendations to criminal activity detection, TUL has received increasing attention over the past five years. While research has focused mainly on learning deep representations that capture complex spatio-temporal mobility patterns unique to individual users, we demonstrate that visit patterns are highly unique among users and thus simple heuristics applied directly to the raw data are sufficient to solve TUL. More specifically, we demonstrate that a single check-in per trajectory is enough to correctly predict the identity of the user up to 85% of the time. Moreover, by using a non-parametric classifier, we scale TUL up to over 100k users, three orders of magnitude beyond the state of the art. Extensive empirical analysis on four real-world datasets (Brightkite, Foursquare, Gowalla and Weeplaces) compares our findings to state-of-the-art results, and more importantly validates our claim that TUL is easier than commonly believed.
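A minimal sketch of the kind of heuristic the abstract describes, assuming check-ins are (user, venue) pairs and that a venue-to-most-frequent-visitor lookup serves as the non-parametric classifier; the names and data below are illustrative.

```python
from collections import Counter, defaultdict
from typing import Dict, List, Tuple

def fit_venue_to_user(checkins: List[Tuple[str, str]]) -> Dict[str, str]:
    """For each venue, remember which user checked in there most often
    during training (a simple non-parametric lookup table)."""
    counts = defaultdict(Counter)
    for user, venue in checkins:
        counts[venue][user] += 1
    return {venue: c.most_common(1)[0][0] for venue, c in counts.items()}

def predict_user(trajectory: List[str], venue_to_user: Dict[str, str]) -> str:
    """Predict the trajectory's user from its first resolvable check-in."""
    for venue in trajectory:
        if venue in venue_to_user:
            return venue_to_user[venue]
    return "unknown"

train = [("alice", "cafe_9"), ("alice", "gym_3"), ("bob", "bar_7"),
         ("bob", "cafe_9"), ("alice", "cafe_9")]
model = fit_venue_to_user(train)
print(predict_user(["gym_3", "bar_7"], model))  # -> "alice"
```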